AI capability
The Loss of Control Playbook: Degrees, Dynamics, and Preparedness
Stix, Charlotte, Hallensleben, Annika, Ortega, Alejandro, Pistillo, Matteo
This research report addresses the absence of an actionable definition for Loss of Control (LoC) in AI systems by developing a novel taxonomy and preparedness framework. Despite increasing policy and research attention, existing LoC definitions vary significantly in scope and timeline, hindering effective LoC assessment and mitigation. To address this issue, we draw from an extensive literature review and propose a graded LoC taxonomy, based on the metrics of severity and persistence, that distinguishes between Deviation, Bounded LoC, and Strict LoC. We model pathways toward a societal state of vulnerability in which sufficiently advanced AI systems have acquired or could acquire the means to cause Bounded or Strict LoC once a catalyst, either misalignment or pure malfunction, materializes. We argue that this state becomes increasingly likely over time, absent strategic intervention, and propose a strategy to avoid reaching a state of vulnerability. Rather than focusing solely on intervening on AI capabilities and propensities potentially relevant for LoC or on preventing potential catalysts, we introduce a complementary framework that emphasizes three extrinsic factors: Deployment context, Affordances, and Permissions (the DAP framework). Compared to work on intrinsic factors and catalysts, this framework has the distinct advantage of being actionable today. Finally, we put forward a plan to maintain preparedness and prevent the occurrence of LoC outcomes should a state of societal vulnerability be reached, focusing on governance measures (threat modeling, deployment policies, emergency response) and technical controls (pre-deployment testing, control measures, monitoring) that could maintain a condition of perennial suspension.
- North America > United States > District of Columbia > Washington (0.14)
- Europe > Ukraine > Kyiv Oblast > Chernobyl (0.14)
- North America > Puerto Rico (0.04)
- (21 more...)
- Research Report > New Finding (1.00)
- Overview (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.94)
- Information Technology > Artificial Intelligence > Applied AI (0.92)
Delivering securely on data and AI strategy
Most organizations feel the imperative to keep pace with continuing advances in AI capabilities, as highlighted in a recent MIT Technology Review Insights report. That clearly has security implications, particularly as organizations navigate a surge in the volume, velocity, and variety of security data. This explosion of data, coupled with fragmented toolchains, is making it increasingly difficult for security and data teams to maintain a proactive and unified security posture. Data and AI teams must move rapidly to deliver the desired business results, but they must do so without compromising security and governance. As they deploy more intelligent and powerful AI capabilities, proactive threat detection and response across the expanded attack surface, insider threats, and supply chain vulnerabilities must remain paramount. "I'm passionate about cybersecurity not slowing us down," says Melody Hildebrandt, chief technology officer at Fox Corporation, "but I also own cybersecurity strategy."
- North America > United States > Massachusetts (0.05)
- Asia > China (0.05)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.35)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.33)
An AI Capability Threshold for Rent-Funded Universal Basic Income in an AI-Automated Economy
We derive the first closed-form condition under which artificial intelligence (AI) capital profits could sustainably finance a universal basic income (UBI) without relying on new taxation or the creation of new jobs. In a Solow-Zeira task-automation economy with a CES aggregator $\sigma < 1$, we introduce an AI capability parameter that scales the productivity of automatable tasks and obtain a tractable expression for the AI capability threshold -- the minimum productivity of AI relative to pre-AI automation required for a balanced transfer. Using current U.S. economic parameters, we find that even in the conservative scenario where no new tasks or jobs emerge, AI systems would need to reach only 5-7 times today's automation productivity to fund an 11%-of-GDP UBI. Our analysis also reveals specific policy levers: raising the public revenue share (e.g., profit taxation) of AI capital from the current 15% to about 33% halves the required AI capability threshold to 3 times existing automation productivity, but gains diminish beyond a 50% public revenue share, especially if regulatory costs increase. Market structure also strongly affects outcomes: monopolistic or concentrated oligopolistic markets reduce the threshold by increasing economic rents, whereas heightened competition significantly raises it. These results therefore offer a rigorous benchmark for assessing when advancing AI capabilities might sustainably finance social transfers in an increasingly automated economy.
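As a back-of-envelope check on the policy-lever arithmetic above (a deliberate simplification, not the paper's closed-form CES condition): if the public sector captures a share $\tau$ of AI capital profits $\Pi$ and must fund a UBI costing a fraction $b$ of GDP $Y$, a balanced transfer requires

$$ \tau \Pi \ge b Y \quad\Longleftrightarrow\quad \frac{\Pi}{Y} \ge \frac{b}{\tau}. $$

With $b = 0.11$ and $\tau = 0.15$, AI profits would need to reach roughly 73% of GDP; raising $\tau$ to $0.33$ cuts that to about 33%. This is the mechanism behind the reported result that roughly doubling the public revenue share halves the required capability threshold.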
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Asia > Singapore (0.04)
- (2 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Banking & Finance > Economy (1.00)
- Law > Taxation Law (0.68)
- Government > Tax (0.68)
The AI Risk Spectrum: From Dangerous Capabilities to Existential Threats
Grey, Markov, Segerie, Charbel-Raphaël
As AI systems become more capable, integrated, and widespread, understanding the associated risks becomes increasingly important. This paper maps the full spectrum of AI risks, from current harms affecting individual users to existential threats that could endanger humanity's survival. We organize these risks into three main causal categories. Misuse risks occur when people deliberately use AI for harmful purposes - creating bioweapons, launching cyberattacks, mounting adversarial attacks on AI systems, or deploying lethal autonomous weapons. Misalignment risks arise when AI systems pursue outcomes that conflict with human values, irrespective of developer intentions; these include risks from specification gaming (reward hacking), scheming, and power-seeking tendencies in pursuit of long-term strategic goals. Systemic risks arise when AI integrates into complex social systems in ways that gradually undermine human agency - concentrating power, accelerating political and economic disempowerment, creating overdependence that leads to human enfeeblement, or irreversibly locking in current values and curtailing future moral progress. Beyond these core categories, we identify risk amplifiers - competitive pressures, accidents, corporate indifference, and coordination failures - that make all risks more likely and severe. Throughout, we connect today's empirically observable AI behaviors and existing harms to plausible future outcomes, demonstrating how current trends could escalate to catastrophic outcomes. Our goal is to help readers understand the complete landscape of AI risks. Good futures are possible, but they don't happen by default; navigating these challenges will require unprecedented coordination, but an extraordinary future awaits if we succeed.
- North America > United States (1.00)
- Asia > China (0.14)
- Europe > Ukraine (0.14)
- (11 more...)
- Media (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- (12 more...)
The power of AI at your fingertips
The demands of modern life can make it hard to stay on top of things. Just when you've made time to work on that creative project, suddenly there are emails that need dealing with, tasks to manage, or scheduling that requires immediate attention, all of which makes it hard to remain productive and inspired. Well, with the latest Intel Core Ultra-powered Windows PC with AI capabilities, there is a wealth of features and capabilities purpose-built to streamline your workload and free up time to spend on the things that are the most important to you. AI might feel like a buzzword that's plastered over everything at the moment, and in some cases it doesn't seem to offer much beyond basic party tricks. But Microsoft's Copilot, powered by Intel Core Ultra processors, is an exception, as it offers plenty of very helpful tools and features that can speed up your workflow, maximise your time and help you stay focussed.
Technical Requirements for Halting Dangerous AI Activities
Barnett, Peter, Scher, Aaron, Abecassis, David
The rapid development of AI systems poses unprecedented risks, including loss of control, misuse, geopolitical instability, and concentration of power. To navigate these risks and avoid worst-case outcomes, governments may proactively establish the capability for a coordinated halt on dangerous AI development and deployment. In this paper, we outline key technical interventions that could allow for a coordinated halt on dangerous AI activities. We discuss how these interventions could help restrict various dangerous AI activities, and show how they can form the technical foundation for potential AI governance plans.
- North America > United States > California > Los Angeles County > Santa Monica (0.04)
- North America > United States > California > Santa Clara County > Stanford (0.04)
- North America > Canada (0.04)
- Information Technology > Security & Privacy (1.00)
- Government (0.94)
Anchoring AI Capabilities in Market Valuations: The Capability Realization Rate Model and Valuation Misalignment Risk
Fang, Xinmin, Tao, Lingfeng, Li, Zhengxiong
Recent breakthroughs in artificial intelligence (AI) have triggered surges in market valuations for AI-related companies, often outpacing the realization of underlying capabilities. We examine the anchoring effect of AI capabilities on equity valuations and propose a Capability Realization Rate (CRR) model to quantify the gap between AI potential and realized performance. Using data from the 2023-2025 generative AI boom, we analyze sector-level sensitivity and conduct case studies (OpenAI, Adobe, NVIDIA, Meta, Microsoft, Goldman Sachs) to illustrate patterns of valuation premium and misalignment. Our findings indicate that AI-native firms commanded outsized valuation premiums anchored to future potential, while traditional companies integrating AI experienced re-ratings subject to proof of tangible returns. We argue that CRR can help identify valuation misalignment risk, where market prices diverge from realized AI-driven value. We conclude with policy recommendations to improve transparency, mitigate speculative bubbles, and align AI innovation with sustainable market value.
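The abstract does not spell out the CRR formula, but the natural minimal reading is a ratio of realized AI-driven value to the AI value the market has priced in. A sketch of that reading in Python (the class, figures, and 0.5 flag threshold are illustrative assumptions, not the paper's definitions):

    from dataclasses import dataclass

    @dataclass
    class Firm:
        name: str
        ai_premium_bn: float         # market value attributed to AI potential ($B)
        realized_ai_value_bn: float  # AI-driven value realized to date ($B)

    def capability_realization_rate(firm: Firm) -> float:
        # CRR read as realized / priced-in AI value; a CRR well below 1
        # suggests the price has run ahead of demonstrated capability.
        if firm.ai_premium_bn <= 0:
            raise ValueError("AI-attributed valuation must be positive")
        return firm.realized_ai_value_bn / firm.ai_premium_bn

    # Illustrative numbers only; not the paper's case-study data.
    for firm in (Firm("AI-native firm", 80.0, 12.0),
                 Firm("Incumbent integrating AI", 20.0, 15.0)):
        crr = capability_realization_rate(firm)
        flag = "possible misalignment" if crr < 0.5 else "roughly aligned"
        print(f"{firm.name}: CRR = {crr:.2f} ({flag})")

On this reading, the AI-native firm's low CRR mirrors the abstract's finding of premiums anchored to future potential, while the incumbent's higher CRR reflects re-rating against tangible returns.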
- North America > United States > Colorado > Denver County > Denver (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- Asia > China (0.04)
- Financial News (0.93)
- Research Report (0.70)
- Information Technology > Services (1.00)
- Banking & Finance > Trading (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.59)
OpenAI's Unreleased AGI Paper Could Complicate Microsoft Negotiations
A small clause inside OpenAI's contract with Microsoft, once considered a distant hypothetical, has now become a flashpoint in one of the biggest partnerships in tech. The clause states that if OpenAI's board ever declares it has developed artificial general intelligence (AGI), it would limit Microsoft's contracted access to the startup's future technologies. Microsoft, which has invested more than $13 billion in OpenAI, is now reportedly pushing for the removal of the clause and is considering walking away from the deal entirely, according to the Financial Times. Late last year, tensions around AGI's suddenly pivotal role in the Microsoft deal spilled into a debate within OpenAI over an internal research paper, according to multiple sources familiar with the matter. Titled "Five Levels of General AI Capabilities," the paper outlines a framework for classifying progressive stages of AI technology.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Our favorite budget smart bird feeder is cheaper than it has been all year at Amazon
I put a bird feeder outside my window a few years ago and it was a fantastic decision. I can look out there and see a wide variety of feathered friends chowing down on the regionally appropriate bird food I provide for them. I also get to see squirrels acting foolish. Right now, Amazon has the Birdfy smart bird feeders on deep discount well before the Prime Day shopping holiday rolls around in early July. This is the easiest possible way to bird watch.
- Energy > Energy Storage (0.32)
- Electrical Industrial Apparatus (0.32)
Evaluating AI cyber capabilities with crowdsourced elicitation
Petrov, Artem, Volkov, Dmitrii
As AI systems become increasingly capable, understanding their offensive cyber potential is critical for informed governance and responsible deployment. However, it is hard to accurately bound their capabilities, and some prior evaluations dramatically underestimated them. The art of extracting maximum task-specific performance from AIs is called "AI elicitation", and today's safety organizations typically conduct it in-house. In this paper, we explore crowdsourced elicitation as an alternative to in-house elicitation work. We host open-access AI tracks at two Capture The Flag (CTF) competitions: AI vs. Humans (400 teams) and Cyber Apocalypse (8000 teams). The AI teams achieve outstanding performance at both events, ranking in the top 5% and top 10% respectively, for a total of $7,500 in bounties. This impressive performance suggests that open-market elicitation may offer an effective complement to in-house elicitation. We propose elicitation bounties as a practical mechanism for maintaining timely, cost-effective situational awareness of emerging AI capabilities. Another advantage of open elicitations is the option to collect human performance data at scale. Applying METR's methodology, we found that AI agents can reliably solve cyber challenges requiring one hour or less of effort from a median human CTF participant.
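The METR-style measurement the abstract refers to can be sketched as fitting a success curve against human solve time and reading off the 50% point. A minimal stand-in in Python (METR fits a logistic model by maximum likelihood; this uses a least-squares fit, and the challenge data below are invented, not the paper's results):

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical per-challenge data: median human solve time (minutes)
    # and whether the AI agent solved the challenge.
    human_minutes = np.array([5, 10, 15, 30, 45, 60, 90, 120, 240, 480])
    ai_solved = np.array([1, 1, 1, 1, 1, 1, 0, 1, 0, 0], dtype=float)

    def success_curve(log_t, log_h50, slope):
        # P(solve) falls with log human time; log_h50 marks the
        # 50%-success point, i.e. the agent's time horizon.
        return 1.0 / (1.0 + np.exp(slope * (log_t - log_h50)))

    params, _ = curve_fit(success_curve, np.log(human_minutes), ai_solved,
                          p0=[np.log(60.0), 1.0])
    print(f"Estimated 50% time horizon: ~{np.exp(params[0]):.0f} human-minutes")

A horizon near an hour would correspond to the abstract's claim that agents reliably solve challenges taking a median human participant one hour or less.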
- North America > United States > Texas > Kleberg County (0.04)
- North America > United States > Texas > Chambers County (0.04)
- Information Technology > Security & Privacy (0.69)
- Government > Military (0.67)